
    A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?

    Artificial intelligence (AI) models are increasingly finding applications in the field of medicine. Concerns have been raised about the explainability of the decisions that are made by these AI models. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models that are currently being used in the field of healthcare. The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards for relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the uses of these XAI models and their implications. We present a comprehensive examination of XAI methodologies as well as an explanation of how trustworthy AI can be derived from explaining AI models for healthcare fields. The discussion of this work will contribute to the formalization of the XAI field. (Comment: 15 pages, 3 figures, accepted for publication in the IEEE Transactions on Artificial Intelligence.)

    Rethinking densely connected convolutional networks for diagnosing infectious diseases.

    Due to its high transmissibility, the COVID-19 pandemic has placed an unprecedented burden on healthcare systems worldwide. X-ray imaging of the chest has emerged as a valuable and cost-effective tool for detecting and diagnosing COVID-19 patients. In this study, we developed a deep learning model using transfer learning with optimized DenseNet-169 and DenseNet-201 models for three-class classification, utilizing the Nadam optimizer. We modified the traditional DenseNet architecture and tuned the hyperparameters to improve the model's performance. The model was evaluated on a novel dataset of 3312 X-ray images drawn from publicly available datasets, using metrics such as accuracy, recall, precision, F1-score, and the area under the receiver operating characteristic curve. Our results showed impressive detection accuracy and recall for COVID-19 patients: 95.98% and 96% using DenseNet-169, and 96.18% and 99% using DenseNet-201. Unique layer configurations and the Nadam optimization algorithm enabled our deep learning model to achieve high rates of accuracy not only for detecting COVID-19 patients but also for identifying normal and pneumonia-affected patients. The model's ability to detect lung problems early on, as well as its low false-positive and false-negative rates, suggests that it has the potential to serve as a reliable diagnostic tool for a variety of lung diseases.
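
    To make the setup concrete, here is a minimal sketch of such a transfer-learning pipeline in Keras: a frozen DenseNet-169 backbone, a small three-class head, and the Nadam optimizer. The head layers, dropout rate, input size, and learning rate are illustrative assumptions, not the authors' tuned configuration.

```python
# Hedged sketch: three-class transfer learning on DenseNet-169 with Nadam.
# The head layers, dropout rate, input size, and learning rate are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet169(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained features for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                    # placeholder regularization
    layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / normal
])
model.compile(
    optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```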

    LDDNet: a deep learning framework for the diagnosis of infectious lung diseases.

    This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. This framework is termed optimized DenseNet201 for lung diseases (LDDNet). The proposed LDDNet was developed by adding layers of 2D global average pooling, dense and dropout layers, and batch normalization to the base DenseNet201 model. The dense layers comprise 1024 ReLU-activated units and 256 sigmoid-activated units. The hyperparameters of the model, including the learning rate, batch size, number of epochs, and dropout rate, were tuned. Next, three datasets of lung diseases were formed from separate open-access sources. One was a CT scan dataset containing 1043 images. The other two were X-ray datasets comprising images of COVID-19-affected, pneumonia-affected, and healthy lungs: one imbalanced with 5935 images and one balanced with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers. The best results were obtained for both the CT scan and CXR datasets using the Nadam optimizer. For the CT scan images, LDDNet showed a COVID-19-positive classification accuracy of 99.36%, a precision of 100%, a recall of 98%, and an F1 score of 99%. For the imbalanced X-ray dataset of 5935 images, LDDNet provided 99.55% accuracy, 73% recall, 100% precision, and an 85% F1 score in detecting COVID-19-affected patients using the Nadam optimizer. For the balanced X-ray dataset, LDDNet provided a 97.07% classification accuracy. For a given set of parameters, the performance results of LDDNet are better than those of the existing ResNet152V2 and XceptionNet algorithms.
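
    A minimal Keras sketch of the LDDNet head described above follows: a DenseNet201 base plus 2D global average pooling, a 1024-unit ReLU dense layer, batch normalization, dropout, and a 256-unit sigmoid dense layer, compiled with Nadam. The layer ordering, dropout rate, and the final softmax output are illustrative assumptions rather than the published configuration.

```python
# Hedged sketch of the LDDNet head: DenseNet201 base + 2D global average
# pooling, a 1024-unit ReLU dense layer, batch normalization, dropout, and a
# 256-unit sigmoid dense layer. Layer ordering, dropout rate, and the final
# softmax output are illustrative assumptions, not the published configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1024, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.4),                    # the dropout rate was tuned; 0.4 is a placeholder
    layers.Dense(256, activation="sigmoid"),
    layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / healthy
])
model.compile(optimizer=tf.keras.optimizers.Nadam(),  # Nadam gave the best reported results
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```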

    Enhancing brain tumor classification with transfer learning across multiple classes: an in-depth analysis.

    This study focuses on leveraging data-driven techniques to diagnose brain tumors through magnetic resonance imaging (MRI) images. Utilizing deep learning (DL), we introduce and fine-tune two robust frameworks, ResNet 50 and Inception V3, specifically designed for the classification of brain MRI images. Building upon the previous success of ResNet 50 and Inception V3 in classifying other medical imaging datasets, our investigation encompasses datasets with distinct characteristics, including one with four classes and another with two. The primary contribution of our research lies in the meticulous curation of these paired datasets. We have also integrated essential techniques, including Early Stopping and ReduceLROnPlateau, to refine the models through hyperparameter optimization. This involved adding extra layers, experimenting with various loss functions and learning rates, and incorporating dropout layers and regularization to ensure model convergence. Furthermore, strategic enhancements, such as customized pooling and regularization layers, have significantly elevated the accuracy of our models, resulting in remarkable classification accuracy. Notably, the pairing of ResNet 50 with the Nadam optimizer yields extraordinary accuracy rates, reaching 99.34% for gliomas, 93.52% for meningiomas, 98.68% for non-tumorous images, and 97.70% for pituitary tumors. These results underscore the transformative potential of our custom-made approach, achieving an aggregate testing accuracy of 97.68% across these four classes. On the two-class dataset, ResNet 50 with the Adam optimizer excels, demonstrating better precision, recall, and F1 score, and an overall accuracy of 99.84%. Moreover, it attains per-class accuracies of 99.62% for 'Tumor Positive' and a perfect 100% for 'Tumor Negative', underscoring a remarkable advancement in brain tumor categorization. This research underscores the innovative possibilities of DL models and our specialized optimization methods in the domain of diagnosing brain cancer from MRI images.
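
    As a rough illustration of this pipeline, the sketch below fine-tunes a ResNet 50 backbone with the EarlyStopping and ReduceLROnPlateau callbacks named in the abstract. The monitored quantity, patience values, and extra head layers are assumptions for illustration, not the authors' tuned settings.

```python
# Hedged sketch: ResNet 50 fine-tuning with the EarlyStopping and
# ReduceLROnPlateau callbacks named in the abstract. Monitored quantities,
# patience values, and the extra head layers are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks, regularizers

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # stand-in for the customized pooling layer
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # assumed regularization
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),  # glioma / meningioma / no tumor / pituitary
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

cbs = [
    callbacks.EarlyStopping(monitor="val_loss", patience=8,
                            restore_best_weights=True),
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2,
                                patience=3, min_lr=1e-6),
]
# model.fit(train_ds, validation_data=val_ds, epochs=60, callbacks=cbs)
```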

    Edge-Based and Prediction-Based Transformations for Lossless Image Compression

    Pixelated images are used to transmit data between computing devices that have cameras and screens. Significant compression of pixelated images has been achieved by an “edge-based transformation and entropy coding” (ETEC) algorithm recently proposed by the authors of this paper. The study of ETEC is extended in this paper with a comprehensive performance evaluation. Furthermore, a novel algorithm termed “prediction-based transformation and entropy coding” (PTEC) is proposed in this paper for pixelated images. In the first stage of the PTEC method, the image is divided hierarchically to predict each current pixel from its neighboring pixels. In the second stage, the prediction errors are used to form two matrices, where one matrix contains the absolute error values and the other contains the polarities of the prediction errors. Finally, entropy coding is applied to the generated matrices. This paper also compares the novel ETEC and PTEC schemes with the existing lossless compression techniques “joint photographic experts group lossless” (JPEG-LS), “set partitioning in hierarchical trees” (SPIHT), and “differential pulse code modulation” (DPCM). Our results show that, for pixelated images, the new ETEC and PTEC algorithms provide better compression than the other schemes. Results also show that PTEC has a lower compression ratio but a shorter computation time than ETEC. Furthermore, when both compression ratio and computation time are taken into consideration, PTEC is more suitable than ETEC for compressing pixelated as well as non-pixelated images.
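
    The two-matrix decomposition at the core of PTEC can be sketched as follows. The simple top/left-average predictor and the zlib stage (standing in for the paper's entropy coder) are assumptions for illustration; the actual PTEC divides the image hierarchically for prediction.

```python
# Hedged sketch of the PTEC decomposition: predict each pixel from its
# neighbors, then split the prediction error into an absolute-value matrix and
# a polarity matrix before entropy coding. The simple top/left-average
# predictor and zlib (standing in for the paper's entropy coder) are
# assumptions; the actual PTEC uses a hierarchical prediction scheme.
import numpy as np
import zlib

def ptec_like_transform(img: np.ndarray):
    """img: 2D uint8 array; returns (absolute error, polarity) matrices."""
    img = img.astype(np.int16)
    pred = np.zeros_like(img)
    pred[1:, :] += img[:-1, :]   # top neighbor
    pred[:, 1:] += img[:, :-1]   # left neighbor
    pred[1:, 1:] //= 2           # interior pixels: average of top and left
    err = img - pred
    return np.abs(err).astype(np.uint8), (err < 0).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
abs_err, polarity = ptec_like_transform(img)
packed = zlib.compress(abs_err.tobytes() + np.packbits(polarity).tobytes())
print(f"{len(packed)} bytes compressed vs {img.size} bytes raw")
```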

    RBFK cipher: a randomized butterfly architecture-based lightweight block cipher for IoT devices in the edge computing environment

    Internet security has become a major concern with the growing use of the Internet of Things (IoT) and edge computing technologies. Even though data processing is handled by the edge server, sensitive data is generated and stored by the IoT devices, which are subject to attack. Since most IoT devices have limited resources, standard security algorithms such as AES, DES, and RSA are too demanding to run properly on them. In this paper, a lightweight symmetric key cipher termed randomized butterfly architecture of fast Fourier transform for key (RBFK) cipher is proposed for resource-constrained IoT devices in the edge computing environment. The butterfly architecture is used in the key scheduling system to produce strong round keys for the five rounds of the encryption method. The RBFK cipher has two key sizes, 64 and 128 bits, with a block size of 64 bits. Owing to the butterfly architecture, the RBFK ciphers have a large avalanche effect, ensuring strong security. The proposed cipher satisfies the Shannon characteristics of confusion and diffusion. The memory usage and execution cycles of the RBFK cipher are assessed using the fair evaluation of lightweight cryptographic systems (FELICS) tool. The proposed ciphers were also implemented in MATLAB 2021a to test key sensitivity by analyzing the histogram, correlation graph, and entropy of encrypted and decrypted images. Since the RBFK ciphers provide better security than recently proposed competing ciphers at minimal computational complexity, they are suitable for IoT devices in an edge computing environment.
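
    To convey the butterfly key-scheduling idea, the toy sketch below derives five 64-bit round keys from a 128-bit key by mixing 16-bit key words in FFT-butterfly pairs. The mixing operations, strides, and constants are invented for illustration and are not the published RBFK design.

```python
# Toy sketch of a butterfly-style key schedule in the spirit of RBFK: 16-bit
# key words are paired FFT-butterfly fashion (stride 1, 2, 4, ...) and mixed
# to derive one 64-bit round key per round. The mixing operations, strides,
# and constants are illustrative assumptions, not the published RBFK design.
def butterfly_round_keys(key: int, rounds: int = 5, words: int = 8, width: int = 16):
    mask = (1 << width) - 1
    w = [(key >> (width * i)) & mask for i in range(words)]  # split the 128-bit key
    round_keys = []
    for r in range(rounds):
        stride = 1 << (r % 3)  # butterfly strides 1, 2, 4, then repeat
        for i in range(0, words, 2 * stride):
            for j in range(i, i + stride):
                a, b = w[j], w[j + stride]
                w[j] = (a + b) & mask                            # butterfly "sum"
                w[j + stride] = ((a - b) ^ (r * 0x9E37)) & mask  # "difference", tweaked per round
        round_keys.append(sum(w[i] << (width * i) for i in range(4)))  # 64-bit round key
    return round_keys

for r, k in enumerate(butterfly_round_keys(0x0123456789ABCDEF0011223344556677)):
    print(f"round {r}: {k:016x}")
```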